An overview as of December 2021
Sigrid Keydana
Privacy- and human rights-related legislation: Foundations
Privacy- and human rights-related legislation: Drafts / Proposals (as of 12/2021)
The Proposed Artificial Intelligence Act
Issued by the Council of Europe (founded in 1949)
Legal institution: European Court of Human Rights (ECtHR, Strasbourg).
Article 8 – Right to respect for private and family life
Everyone has the right to respect for his private and family life, his home and his correspondence.
There shall be no interference by a public authority with the exercise of this right except such as is in accordance with the law and is necessary in a democratic society in the interests of national security, public safety or the economic well-being of the country, for the prevention of disorder or crime, for the protection of health or morals, or for the protection of the rights and freedoms of others.
Effective as of the entry into force of the Treaty of Lisbon (2009)
Enforced by: Court of Justice of the European Union (CJEU, Luxembourg)
Article 7 - Respect for private and family life
Everyone has the right to respect for his or her private and family life, home and communications.
Article 8 - Protection of personal data
Everyone has the right to the protection of personal data concerning him or her.
Such data must be processed fairly for specified purposes and on the basis of the consent of the person concerned or some other legitimate basis laid down by law. Everyone has the right of access to data which has been collected concerning him or her, and the right to have it rectified.
Compliance with these rules shall be subject to control by an independent authority.
applies whenever personal data are collected, used, or stored
rights:
absolute: to be informed, to access, to rectification, to data portability
restricted: to erasure, to restrict processing, to object
roles: controller, joint controller, processor
A Data Protection Impact Assessment (DPIA) must always be conducted when the processing could result in a high risk to the rights and freedoms of natural persons. E.g.,
scoring/profiling,
automatic decisions which lead to legal consequences for those impacted,
systematic monitoring,
processing of special personal data,
the merging or combining of data which was gathered by various processes,
data transfer to countries outside the EU/EEA
will be repealed by the ePrivacy Regulation once that has been adopted
based on Article 16 and Article 114 of the Treaty on the Functioning of the European Union (TFEU)1, the latter of which lays out rules for the internal market. 2
rules on data retention, cookies, e-mail communication (among others)
criticized as a step back compared to the GDPR
covers: re-use of public data; “data sharing services/intermediaries”; “data altruism”
where data includes personal data
“without prejudice to Regulation (EU) 2016/679 and Directive 2002/58/EC”
“Indeed, considering that data protection is a fundamental right guaranteed by Article 8 of the Charter, and taking into account that one of the main purposes of the GDPR is to provide data subjects with control over personal data relating to them, the EDPB reiterates that personal data cannot be considered as a “tradeable commodity”. An important consequence of this is that, even if the data subject can agree to the processing of his or her personal data, he or she cannot waive his or her fundamental rights.” 3
defines a layered set of responsibilities for intermediaries (network infrastructure), hosting services, online platforms, and “very large online platforms”
Concerns:4
allows the use of AI systems that categorize individuals from biometrics according to ethnicity, gender, and political or sexual orientation
allows emotion recognition
allows targeted advertising
sets up ex-ante rules for so-called gatekeepers, including
to refrain from combining personal data from different sources
to submit (on an annual basis) an independently audited description of any techniques deployed for profiling consumers
The general objective is to ‘ensure the proper functioning of the internal market by creating the conditions for the development and use of trustworthy artificial intelligence in the Union’ (impact assessment, p. 32).
The specific objectives are:
(i) to ensure that AI systems placed on the market and used are safe and respect existing rules on fundamental rights and Union values,
(ii) to ensure legal certainty to facilitate investment and innovation in AI,
(iii) to enhance governance and effective enforcement of existing rules on fundamental rights and safety requirements applicable to AI systems, and
(iv) to facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation. 5
Risk-based:
Unacceptable risk: prohibited (absolutely or with exceptions)
High risk: catalog of requirements
Limited risk: transparency requirements
Minimal risk: “encourage” and “facilitate” voluntary codes of conduct
The placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm
“[a]n inaudible sound [played] in truck drivers’ cabins to push them to drive longer than healthy and safe [where] AI is used to find the frequency maximising this effect on drivers”
The placing on the market, putting into service or use of an AI system that exploits any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm
“[a] doll with integrated voice assistant [which] encourages a minor to engage in progressively dangerous behavior or challenges in the guise of a fun or cool game”
Harm: does not consider cumulative harm
Intent:
hard to prove
does not consider dual use
does not consider the usual newspeak / marketing strategies
leaves out dynamics in the user base
Does not add much to existing EU law (Unfair Commercial Practices Directive)
The sale or use of AI systems, used by or on behalf of public authorities, that generate trustworthiness scores and lead to either unjustified or disproportionate treatment of individuals or groups, or to detrimental treatment which, while justifiable and proportionate, occurs in a context unrelated to that of the input data
Problems: 8
What counts as an unrelated context is open to interpretation
Public authorities: What about, e.g., privately controlled delivery, telecommunications or transport?
Some uses of real-time biometric systems in publicly accessible spaces by law enforcement
Problems: 9
Only if by law enforcement
Real-time only
Exceptions: search for victims/missing children; threat to life or physical safety / terrorist attack; detection, localisation, identification or prosecution of a perpetrator or suspect of a crime with maximum sentence of at least 3 years
… in a number of defined applications, products and sectors.
Based on and entwined with the New Legislative Framework (NLF) (the New Approach when introduced in 1985), a common EU approach to the regulation of certain products such as lifts, medical devices, personal protective equipment and toys
1. biometric identification and categorisation (both remote and offline);
2. management and operation of critical infrastructure;
3. educational and vocational training;
4. employment, worker management and access to self-employment;
5. access to and enjoyment of essential services and benefits;
6. law enforcement;
7. migration, asylum and border management;
8. administration of justice and democracy.
What:
Risk management system; data quality criteria; accuracy; robustness and cybersecurity; technical documentation; logging; human oversight.
Who:
Providers, not users.
How:
Certification, or adherence to standards developed by the three European Standardisation Organisations (ESOs).
Can add sub-areas within these areas, if similar in risk to an existing in-scope application, but cannot add entirely new areas.
Datasets only need to meet requirements “sufficiently” and “in view of the intended purpose of the system”
No explicit discussion of leakage of training data or other personal data from models
Unclear if essential stakeholders will be involved in the standardisation process
Certification bodies are private bodies!
Restrictions are on providers; however, a provider may not know how the system is used.
For “limited-risk” applications, either the provider or the user has to fulfill transparency requirements.
Three categories are named:
Scientific models of emotion are overly simplistic, and the data reflect this
Emotions, and their expression, are inseparable from social, cultural, situational context
Emotions are intimately related to human dignity
Emotions are often subject to moral judgement
In consequence, social credit effects arise
The risk-based approach relies on context; in practice, however, models and applications are ported to different contexts.
Does not reflect important ways AI is being used; for example, it does not cover Artificial Intelligence as a Service (AIaaS)12.
Does not apply to systems already in use.
Excludes international law enforcement cooperation.
Preemptive effect: Aims to “prevent unilateral Member States actions that risk to fragment the market and to impose even higher regulatory burdens […]”.
The concrete lists of applications, combined with the massive lobbying that has been going on, mean that this is the rule of a dominant party, not of those impacted/affected, or of minorities.
Leaves out risks for groups of individuals or the society as a whole.
No reference to the individual affected / no regard to human rights.
Not enough focus on data protection.
EDPB-EDPS Statement 03/2021 on the ePrivacy Regulation
https://www.cr-online.de/blog/2021/09/19/in-der-datenschutzrechtlichen-todeszone/.
BfDI kritisiert Position des Rats zur ePrivacy-Verordnung (BfDI criticizes the Council’s position on the ePrivacy Regulation)
EDPB-EDPS Statement 05/2021 on the Data Governance Act in light of the legislative developments
EDPB-EDPS Statement on the Digital Services Package and Data Strategy
EDPB-EDPS Opinion 2/2021 on the Proposal for a Digital Markets Act
Selbst, A. et al.: Fairness and Abstraction in Sociotechnical Systems.
Stark, L., Hoey, J.: The Ethics of Emotion in Artificial Intelligence Systems.
AI Impact Working Group @RStudio, PBC
The second of the two main treaties of the European Union, a continuation of the original Treaty of Rome (1958). The other is the Treaty on European Union (TEU), originally the Treaty of Maastricht (1993), both as amended by the Treaty of Lisbon (2009).↩︎
as well as Article 7 of the Charter of Fundamental Rights↩︎
EDPB-EDPS Statement 05/2021 on the Data Governance Act in light of the legislative developments↩︎
https://edpb.europa.eu/system/files/2021-11/edpb_statement_on_the_digital_services_package_and_data_strategy_en.pdf↩︎
https://www.europarl.europa.eu/RegData/etudes/BRIE/2021/694212/EPRS_BRI(2021)694212_EN.pdf↩︎
https://www.tagesspiegel.de/politik/eu-guidelines-ethics-washing-made-in-europe/24195496.html↩︎
Essentially following Veale & Borgesius, Demystifying the Draft EU Artificial Intelligence Act.↩︎
Essentially following Veale & Borgesius, Demystifying the Draft EU Artificial Intelligence Act.↩︎
Essentially following Veale & Borgesius, Demystifying the Draft EU Artificial Intelligence Act.↩︎
Essentially following Veale & Borgesius, Demystifying the Draft EU Artificial Intelligence Act.↩︎
See Stark & Hoey, The Ethics of Emotion in Artificial Intelligence Systems.↩︎
See https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3824736.↩︎